
    Bayesian optimization for materials design

    We introduce Bayesian optimization, a technique developed for optimizing time-consuming engineering simulations and for fitting machine learning models on large datasets. Bayesian optimization guides the choice of experiments during materials design and discovery so as to find good material designs in as few experiments as possible. We focus on the case in which material designs are parameterized by a low-dimensional vector. Bayesian optimization is built on a statistical technique called Gaussian process regression, which allows the performance of a new design to be predicted from previously tested designs. After providing a detailed introduction to Gaussian process regression, we introduce two Bayesian optimization methods: expected improvement, for design problems with noise-free evaluations, and the knowledge-gradient method, which generalizes expected improvement and may be used in design problems with noisy evaluations. Both methods are derived using a value-of-information analysis and enjoy one-step Bayes-optimality.
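
    To make the workflow concrete, the sketch below pairs Gaussian process regression with the expected-improvement criterion on a noise-free, one-dimensional toy objective. The objective function, kernel choice, and candidate grid are illustrative assumptions, not details taken from the paper.

        # A minimal sketch of noise-free Bayesian optimization: fit a GP to the
        # evaluated designs, then maximize expected improvement (EI) over a
        # candidate grid. Objective, kernel, and grid are illustrative only.
        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def expected_improvement(X_cand, gp, f_best):
            """EI for minimization: E[max(f_best - f(x), 0)] under the GP posterior."""
            mu, sigma = gp.predict(X_cand, return_std=True)
            sigma = np.maximum(sigma, 1e-12)             # guard against zero std
            z = (f_best - mu) / sigma
            return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

        # Toy objective with a few noise-free evaluations of earlier designs
        f = lambda x: np.sin(3 * x) + 0.5 * x ** 2
        X = np.array([[-1.5], [0.0], [1.0]])
        y = f(X).ravel()

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)
        X_cand = np.linspace(-2, 2, 401).reshape(-1, 1)  # candidate designs
        ei = expected_improvement(X_cand, gp, y.min())
        x_next = X_cand[np.argmax(ei)]                   # next design to evaluate
        print(f"next suggested design: x = {x_next[0]:.3f}")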

    Precalibrating an intermediate complexity climate model

    Credible climate predictions require a rational quantification of uncertainty, but full Bayesian calibration requires detailed estimates of prior probability distributions and covariances, which are difficult to obtain in practice. We describe a simplified procedure, termed precalibration, which provides an approximate quantification of uncertainty in climate prediction and requires only that uncontroversially implausible values of certain inputs and outputs are identified. The method is applied to intermediate-complexity model simulations of the Atlantic meridional overturning circulation (AMOC) and confirms the existence of a cliff-edge catastrophe in freshwater-forcing input space. When uncertainty in 14 further parameters is taken into account, an implausible, AMOC-off region remains as a robust feature of the model dynamics, but its location is found to depend strongly on the values of the other parameters.
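
    The core of the procedure can be illustrated with a toy stand-in model: sample the uncertain inputs broadly, run the model, and rule out ensemble members whose outputs are uncontroversially implausible. The model, input ranges, and threshold below are invented for illustration and bear no relation to the actual intermediate-complexity climate model used in the paper.

        # A minimal sketch of the precalibration idea on a toy stand-in model:
        # sample inputs, run the model, and discard members whose output is
        # uncontroversially implausible. Names and thresholds are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)

        def toy_amoc(freshwater, diffusivity):
            """Toy stand-in returning an 'AMOC strength' in Sv (not the real model)."""
            return 20.0 - 15.0 * freshwater + 3.0 * diffusivity + rng.normal(0, 0.5)

        # Ensemble over broad, weakly constrained input ranges
        n = 10_000
        fw = rng.uniform(0.0, 2.0, n)      # freshwater-forcing input
        kd = rng.uniform(0.0, 1.0, n)      # a second uncertain parameter
        amoc = np.array([toy_amoc(f, k) for f, k in zip(fw, kd)])

        # Precalibration: keep only members with a plausible (AMOC-on) output
        plausible = amoc > 5.0             # implausibility threshold (illustrative)
        print(f"kept {plausible.mean():.1%} of the ensemble")
        print(f"plausible freshwater range: [{fw[plausible].min():.2f}, "
              f"{fw[plausible].max():.2f}]")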

    Fuzzy min-max neural networks for categorical data: application to missing data imputation

    The fuzzy min–max neural network classifier is a supervised learning method that takes a hybrid neural-network and fuzzy-systems approach. All input variables in the network are required to correspond to continuously valued variables, which can be a significant constraint in many real-world situations where the data are not only quantitative but also categorical. The usual way of dealing with such variables is to replace the categorical values with numerical ones and treat them as if they were continuously valued, but this implicitly defines a possibly unsuitable metric for the categories. A number of different procedures have been proposed to tackle the problem. In this article, we present a new method: the procedure extends the fuzzy min–max neural network input to categorical variables by introducing new fuzzy sets, a new operation, and a new architecture, providing greater flexibility and wider application. The proposed method is then applied to missing-data imputation in voting intention polls. The microdata of this type of poll (the set of the respondents' individual answers to the questions) are especially well suited for evaluating the method, since they include a large number of numerical and categorical attributes.
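
    As a reference point for what the categorical extension must generalize, the sketch below implements one common variant of the classical membership function for a continuous-valued hyperbox (in the style of Simpson's original fuzzy min–max formulation). The sensitivity parameter gamma and the example box are illustrative; this is the continuous baseline, not the paper's extended architecture.

        # A minimal sketch of a fuzzy min-max hyperbox membership function for
        # continuous inputs; one common formulation, with illustrative values.
        import numpy as np

        def hyperbox_membership(x, v, w, gamma=4.0):
            """Membership of point x in a hyperbox with min corner v, max corner w.

            Returns 1 inside the box, decaying linearly with distance outside,
            averaged over dimensions.
            """
            below = np.maximum(0.0, v - x)       # distance below the min corner
            above = np.maximum(0.0, x - w)       # distance above the max corner
            per_dim = 1.0 - np.minimum(1.0, gamma * (below + above))
            return per_dim.mean()

        box_min = np.array([0.2, 0.3])
        box_max = np.array([0.5, 0.6])
        print(hyperbox_membership(np.array([0.4, 0.5]), box_min, box_max))  # 1.0
        print(hyperbox_membership(np.array([0.9, 0.5]), box_min, box_max))  # < 1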

    Design of Experiments for Screening

    The aim of this paper is to review methods of designing screening experiments, ranging from designs originally developed for physical experiments to those especially tailored to experiments on numerical models. The strengths and weaknesses of the various designs for screening variables in numerical models are discussed. First, classes of factorial designs for experiments to estimate main effects and interactions through a linear statistical model are described, specifically regular and nonregular fractional factorial designs, supersaturated designs and systematic fractional replicate designs. Generic issues of aliasing, bias and cancellation of factorial effects are discussed. Second, group screening experiments are considered, including factorial group screening and sequential bifurcation. Third, random sampling plans are discussed, including Latin hypercube sampling and sampling plans to estimate elementary effects. Fourth, a variety of modelling methods commonly employed with screening designs are briefly described. Finally, a novel study demonstrates six screening methods on two frequently used exemplars, and their performances are compared.
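
    To illustrate the elementary-effects idea mentioned above, the sketch below screens a toy two-factor model with the simplest one-at-a-time perturbation scheme and ranks factors by the mean absolute elementary effect (the mu* statistic of Morris-style screening). The toy model, step size, and number of repetitions are illustrative assumptions, not a sampling plan from the paper.

        # A minimal sketch of elementary-effects screening on a toy function:
        # perturb one factor at a time from random base points and average the
        # absolute finite-difference effects. All settings are illustrative.
        import numpy as np

        rng = np.random.default_rng(1)

        def model(x):
            """Toy numerical model with one strong and one weak factor."""
            return 4.0 * x[0] + 0.1 * x[1] + 2.0 * x[0] * x[1]

        k, r, delta = 2, 50, 0.1          # factors, repetitions, step size
        effects = np.zeros((r, k))
        for i in range(r):
            base = rng.uniform(0.0, 1.0 - delta, k)   # random base point
            y0 = model(base)
            for j in range(k):                        # one-at-a-time perturbation
                xj = base.copy()
                xj[j] += delta
                effects[i, j] = (model(xj) - y0) / delta

        mu_star = np.abs(effects).mean(axis=0)        # mean absolute elementary effect
        print(f"mu* per factor: {mu_star.round(2)}")  # large mu* => keep the factor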

    Statistical Methods in Recent HIV Noninferiority Trials: Reanalysis of 11 Trials

    Background: In recent years the "noninferiority" trial has emerged as the new standard design for HIV drug development among antiretroviral patients, often with a primary endpoint based on the difference in success rates between the two treatment groups. Different statistical methods have been introduced to provide confidence intervals for that difference. The main objective is to investigate whether the choice of statistical method changes the conclusion of the trials. Methods: We present 11 trials published in 2010 that used a difference in proportions as the primary endpoint. In these trials, 5 different statistical methods had been used to estimate such confidence intervals. The five methods are described and applied to data from the 11 trials. Noninferiority of the new treatment is not demonstrated if the confidence interval of the treatment difference includes the prespecified noninferiority margin. Results: Results indicated that confidence intervals can be quite different according to the method used. In many situations, however, the conclusions of the trials are not altered, because the point estimates of the treatment difference were too far from the prespecified noninferiority margins. Nevertheless, in a few trials the use of different statistical methods led to different conclusions. In particular, the use of "exact" methods can be very confusing. Conclusion: The statistical method used to estimate confidence intervals in noninferiority trials can have a strong impact on the conclusions.
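
    As an illustration of how the choice of interval method can move the bound that is compared against the margin, the sketch below implements two standard intervals for a difference in proportions, the Wald interval and Newcombe's hybrid-score interval, and checks each lower bound against a noninferiority margin. The counts and the margin are invented for illustration and are not taken from the 11 reanalysed trials; these two methods are common choices and are not necessarily among the five the paper examines.

        # A minimal sketch: two 95% confidence intervals for a difference in
        # success proportions, compared against a noninferiority margin.
        # Trial counts and margin below are illustrative, not real trial data.
        import numpy as np
        from scipy.stats import norm

        def wald_ci(x1, n1, x2, n2, alpha=0.05):
            p1, p2 = x1 / n1, x2 / n2
            se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
            z = norm.ppf(1 - alpha / 2)
            d = p1 - p2
            return d - z * se, d + z * se

        def wilson(x, n, alpha=0.05):
            """Wilson score interval for a single proportion."""
            z = norm.ppf(1 - alpha / 2)
            p = x / n
            center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
            half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
            return center - half, center + half

        def newcombe_ci(x1, n1, x2, n2, alpha=0.05):
            """Newcombe's hybrid-score interval for a difference in proportions."""
            l1, u1 = wilson(x1, n1, alpha)
            l2, u2 = wilson(x2, n2, alpha)
            p1, p2 = x1 / n1, x2 / n2
            d = p1 - p2
            lower = d - np.sqrt((p1 - l1)**2 + (u2 - p2)**2)
            upper = d + np.sqrt((u1 - p1)**2 + (p2 - l2)**2)
            return lower, upper

        # Illustrative trial: 84% vs 80% success, margin of -10 percentage points
        margin = -0.10
        for name, (lo, hi) in [("Wald", wald_ci(168, 200, 160, 200)),
                               ("Newcombe", newcombe_ci(168, 200, 160, 200))]:
            verdict = "noninferior" if lo > margin else "inconclusive"
            print(f"{name}: [{lo:.3f}, {hi:.3f}] -> {verdict}")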

    A modular analysis of the Auxin signalling network

    Auxin is essential for plant development from embryogenesis onwards, acting in large part through the regulation of transcription. The proteins acting in the signalling pathway that regulates transcription downstream of auxin have been identified, as have the interactions between them, thus establishing the topology of this network, which implicates 54 Auxin Response Factor (ARF) and Aux/IAA (IAA) transcriptional regulators. Here, we study the auxin signalling pathway by means of mathematical modelling at the single-cell level. We proceed analytically, by considering the role played by five functional modules into which the auxin pathway can be decomposed: the sequestration of ARF by IAA, the transcriptional repression by IAA, dimer formation amongst ARFs and IAAs, the feedback loop on IAA, and the auxin-induced degradation of IAA proteins. Focusing on these modules allows their function within the dynamics of auxin signalling to be assessed. One key outcome of this analysis is that there are both specific and overlapping functions between all the major modules of the signalling pathway. This suggests a combinatorial function of the modules in optimizing the speed and amplitude of auxin-induced transcription. Our work identifies potential functions for homo- and hetero-dimerization of transcriptional regulators, with ARF:IAA, IAA:IAA and ARF:ARF dimerization controlling, respectively, the amplitude, speed and sensitivity of the response, and with the interaction of IAA with transcriptional repressors acting synergistically on these characteristics of the signalling pathway. Finally, we suggest experiments which might allow disentangling the structure of the auxin signalling pathway and further analysing its function in plants.
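
    As a toy illustration of the modular approach, the sketch below encodes two of the named modules, the sequestration of ARF by IAA and the auxin-induced degradation of IAA, as a two-variable ODE and compares the free-ARF level at low and high auxin. The equations and rate constants are illustrative stand-ins, not the paper's model.

        # A minimal sketch of two modules of an auxin-signalling ODE: auxin
        # accelerates IAA degradation, which shifts the ARF:IAA binding
        # equilibrium and frees ARF. All parameter values are illustrative.
        from scipy.integrate import solve_ivp

        def auxin_module(t, y, auxin, k_prod=1.0, k_deg=0.1, k_aux=2.0,
                         k_on=5.0, k_off=0.5, arf_total=1.0):
            iaa, complex_ = y
            arf_free = arf_total - complex_                    # conserved ARF pool
            d_iaa = (k_prod - (k_deg + k_aux * auxin) * iaa    # auxin speeds IAA decay
                     - k_on * iaa * arf_free + k_off * complex_)
            d_complex = k_on * iaa * arf_free - k_off * complex_
            return [d_iaa, d_complex]

        # Step from low to high auxin and watch free ARF (the active regulator) rise
        sol_low = solve_ivp(auxin_module, (0, 50), [0.5, 0.5], args=(0.1,))
        sol_high = solve_ivp(auxin_module, (0, 50), sol_low.y[:, -1], args=(5.0,))
        print(f"free ARF, low auxin:  {1.0 - sol_low.y[1, -1]:.3f}")
        print(f"free ARF, high auxin: {1.0 - sol_high.y[1, -1]:.3f}")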